8,139 research outputs found

    Alumni News: Winter 2003

    Get PDF
    Newsletter for Boston University School of Medicine alumni

    Refraction of sound by jet flow and jet temperature. Extension of temperature range parameters and development of theory

    Get PDF
    Refraction of the sound field of an omnidirectional pure-tone point source by the temperature and velocity fields of a 3/4-inch air or nitrogen jet

    Cosmologies with a time dependent vacuum

    Full text link
    The idea that the cosmological term, Lambda, should be a time-dependent quantity in cosmology is a most natural one. It is difficult to conceive an expanding universe with a strictly constant vacuum energy density, namely one that has remained immutable since the origin of time. A smoothly evolving vacuum energy density that inherits its time dependence from cosmological functions, such as the Hubble rate or the scale factor, is not only a qualitatively more plausible and intuitive idea, but is also suggested by fundamental physics, in particular by quantum field theory (QFT) in curved space-time. To implement this notion, it is not strictly necessary to resort to ad hoc scalar fields, as is usually done in the literature (e.g. in quintessence formulations and the like). A "running" Lambda term can be expected on grounds very similar to those on which one expects (and observes) the running of couplings and masses with a physical energy scale in QFT. Furthermore, the experimental evidence that the equation of state of the dark energy could be evolving with time/redshift (including the possibility that it might currently behave phantom-like) suggests that a time-variable Lambda term (possibly accompanied by a variable Newton's gravitational coupling G=G(t)) could account in a natural way for all these features. Remarkably enough, a class of these models (the "new cosmon") could even hold the clue for solving the old cosmological constant problem, including the coincidence problem. Comment: LaTeX, 15 pages, 4 figures
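
    For concreteness, a frequently used parametrization of such a running vacuum (quoted here as an illustrative assumption, not necessarily the specific form adopted in this work) ties the vacuum energy density directly to the Hubble rate:

        \rho_\Lambda(H) = \frac{3}{8\pi G}\left(c_0 + \nu H^2\right), \qquad \Lambda(H) = 3\left(c_0 + \nu H^2\right),

    where the dimensionless coefficient \nu plays the role of the beta-function of the running, and \nu = 0 recovers a strictly constant \Lambda (the LCDM limit).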

    What is there in the black box of dark energy: variable cosmological parameters or multiple (interacting) components?

    Get PDF
    The coincidence problems and other dynamical features of dark energy are studied in cosmological models with variable cosmological parameters and in models with composite dark energy. It is found that many of the problems usually considered to be cosmological coincidences can be explained or significantly alleviated in the aforementioned models. Comment: 6 pages, 1 figure, talk given at IRGAC2006 (Barcelona, July 11-15, 2006), to appear in J. Phys.
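
    As a reminder of what the coincidence problem refers to (a standard textbook definition, not a statement taken from this talk), one considers the ratio of the dark energy and matter densities:

        r(t) = \frac{\rho_\Lambda(t)}{\rho_m(t)},

    which in standard LCDM grows without bound (r \propto a^3 for constant \rho_\Lambda), so that r = O(1) today appears to single out our epoch; models with variable cosmological parameters or interacting components aim to keep r bounded or slowly varying.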

    Hubble expansion and structure formation in the "running FLRW model" of the cosmic evolution

    Full text link
    A new class of FLRW cosmological models with time-evolving fundamental parameters should emerge naturally from a description of the expansion of the universe based on the first principles of quantum field theory and string theory. Within this general paradigm, one expects that both the gravitational Newton's coupling, G, and the cosmological term, Lambda, should not be strictly constant but appear rather as smooth functions of the Hubble rate. This scenario ("running FLRW model") predicts, in a natural way, the existence of dynamical dark energy without invoking the participation of extraneous scalar fields. In this paper, we perform a detailed study of these models in the light of the latest cosmological data, which serves to illustrate the phenomenological viability of the new dark energy paradigm as a serious alternative to the traditional scalar field approaches. By performing a joint likelihood analysis of the recent SNIa data, the CMB shift parameter, and the BAOs traced by the Sloan Digital Sky Survey, we put tight constraints on the main cosmological parameters. Furthermore, we derive the theoretically predicted dark-matter halo mass function and the corresponding redshift distribution of cluster-size halos for the "running" models studied. Despite the fact that these models closely reproduce the standard LCDM Hubble expansion, the normalization of their perturbation power spectrum varies, implying, in many cases, a significantly different cluster-size halo redshift distribution. This fact indicates that it should be relatively easy to distinguish between the "running" models and the LCDM cosmology using realistic future X-ray and Sunyaev-Zeldovich cluster surveys. Comment: Version published in JCAP 08 (2011) 007: 1+41 pages, 6 Figures, 1 Table. Typos corrected. Extended discussion on the computation of the linearly extrapolated density threshold above which structures collapse in time-varying vacuum models. One appendix, a few references and one figure added
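
    The sketch below shows the general shape of the joint SNIa + CMB shift + BAO chi-squared described in the abstract. The H(z) parametrization, the supernova data arrays and the fiducial numbers are illustrative assumptions only, not the paper's actual data or pipeline.

        import numpy as np
        from scipy.integrate import quad

        C_KMS = 299792.458  # speed of light [km/s]

        def hubble(z, H0, Om, nu):
            # Toy "running" correction controlled by nu; nu = 0 recovers flat LCDM.
            E2 = Om * (1.0 + z) ** 3 + (1.0 - Om) * (1.0 + nu * np.log(1.0 + z))
            return H0 * np.sqrt(E2)

        def comoving_distance(z, H0, Om, nu):
            # Flat universe: D_C(z) = c * int_0^z dz' / H(z')   [Mpc]
            integral, _ = quad(lambda x: C_KMS / hubble(x, H0, Om, nu), 0.0, z)
            return integral

        def chi2_snia(H0, Om, nu, z_sn, mu_obs, mu_err):
            # Distance-modulus residuals for a (hypothetical) SNIa sample.
            dl = np.array([(1.0 + z) * comoving_distance(z, H0, Om, nu) for z in z_sn])
            mu_th = 5.0 * np.log10(dl) + 25.0
            return np.sum(((mu_obs - mu_th) / mu_err) ** 2)

        def chi2_cmb_shift(H0, Om, nu, R_obs=1.70, R_err=0.03, z_rec=1090.0):
            # CMB shift parameter R = sqrt(Om) * (H0 / c) * D_C(z_rec).
            R_th = np.sqrt(Om) * H0 * comoving_distance(z_rec, H0, Om, nu) / C_KMS
            return ((R_obs - R_th) / R_err) ** 2

        def chi2_bao(H0, Om, nu, A_obs=0.469, A_err=0.017, z_bao=0.35):
            # SDSS acoustic parameter A = sqrt(Om) * (H0 / c) * D_V(z_bao) / z_bao,
            # with D_V the usual volume-averaged distance.
            dc = comoving_distance(z_bao, H0, Om, nu)
            dv = (dc ** 2 * C_KMS * z_bao / hubble(z_bao, H0, Om, nu)) ** (1.0 / 3.0)
            A_th = np.sqrt(Om) * H0 * dv / (C_KMS * z_bao)
            return ((A_obs - A_th) / A_err) ** 2

        # Evaluate the joint chi^2 at a fiducial point with a few fake supernovae;
        # a real analysis would minimise it over (H0, Om, nu) and map confidence regions.
        z_sn = np.array([0.1, 0.3, 0.6, 0.9])
        mu_obs = np.array([38.3, 41.0, 42.8, 43.9])
        mu_err = np.array([0.15, 0.15, 0.20, 0.20])
        total = (chi2_snia(70.0, 0.27, 0.0, z_sn, mu_obs, mu_err)
                 + chi2_cmb_shift(70.0, 0.27, 0.0)
                 + chi2_bao(70.0, 0.27, 0.0))
        print("joint chi^2 at the fiducial point:", total)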

    Effective growth of matter density fluctuations in the running LCDM and LXCDM models

    Full text link
    We investigate the matter density fluctuations \delta\rho/\rho for two dark energy (DE) models in the literature in which the cosmological term \Lambda is a running parameter. In the first model, the running LCDM model, matter and DE exchange energy, whereas in the second model, the LXCDM model, the total DE and matter components are conserved separately. The LXCDM model was proposed as an interesting solution to the cosmic coincidence problem. It includes an extra dynamical component, the "cosmon" X, which interacts with the running \Lambda, but not with matter. In our analysis we make use of the current value of the linear bias parameter, b^2(0) = P_{GG}/P_{MM}, where P_{MM} ~ (\delta\rho/\rho)^2 is the present matter power spectrum and P_{GG} is the galaxy fluctuation power spectrum. The former can be computed within a given model, and the latter is obtained from the observed LSS data (at small z) of the 2dF galaxy redshift survey. It is found that b^2(0)=1 to within 10% accuracy for the standard LCDM model. Adopting this limit for any DE model and using a method based on the effective equation of state for the DE, we can set a limit on the growth of matter density perturbations for the running LCDM model, whose solution is known. This provides a good test of the procedure, which we then apply to the LXCDM model in order to determine the physical region of parameter space compatible with the LSS data. In this region, the LXCDM model is consistent with known observations and at the same time provides a viable solution to the cosmic coincidence problem. Comment: LaTeX, 38 pages, 8 figures. Version accepted in JCAP
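
    To illustrate how an effective dark energy equation of state feeds into the growth of matter perturbations, the sketch below integrates the standard linear growth equation for a toy w_eff(a). The specific w_eff, Om0 and initial conditions are assumptions for the example, not the running LCDM or LXCDM solutions; the resulting present-day power spectrum normalisation is what would enter a bias ratio such as b^2(0) = P_{GG}/P_{MM}.

        import numpy as np
        from scipy.integrate import quad, solve_ivp

        Om0 = 0.27  # present matter density parameter (illustrative)

        def w_eff(a):
            # Toy CPL-like effective equation of state, w(a) = w0 + wa * (1 - a).
            return -1.0 + 0.1 * (1.0 - a)

        def rho_de_ratio(a):
            # rho_DE(a) / rho_DE(1) = exp( 3 * int_a^1 [1 + w(a')] / a' da' )
            integral, _ = quad(lambda x: (1.0 + w_eff(x)) / x, a, 1.0)
            return np.exp(3.0 * integral)

        def E2(a):
            # H^2(a) / H0^2 for a flat universe: matter + effective dark energy.
            return Om0 * a ** -3 + (1.0 - Om0) * rho_de_ratio(a)

        def growth_rhs(lna, y):
            # Linear growth equation in e-folds x = ln(a):
            #   delta'' + (2 + dlnH/dx) * delta' = (3/2) * Omega_m(a) * delta
            a = np.exp(lna)
            delta, ddelta = y
            Om_a = Om0 * a ** -3 / E2(a)
            eps = 1e-4  # central difference for dlnH/dlna = 0.5 * dlnE2/dlna
            dlnH_dlna = 0.25 * (np.log(E2(a * np.exp(eps))) - np.log(E2(a * np.exp(-eps)))) / eps
            return [ddelta, -(2.0 + dlnH_dlna) * ddelta + 1.5 * Om_a * delta]

        # Start deep in matter domination, where delta grows proportionally to a.
        a_ini = 1e-3
        sol = solve_ivp(growth_rhs, [np.log(a_ini), 0.0], [a_ini, a_ini], rtol=1e-8)
        print("growth factor D(a=1) / D(a_ini):", sol.y[0, -1] / a_ini)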

    FRCM-to-masonry bonding behaviour in the case of curved surfaces: Experimental investigation

    Get PDF
    Fabric-reinforced cementitious matrix (FRCM) composites are increasingly used for the reinforcement of masonry structures. The combination of high-tensile-strength fabrics (or meshes) with cementitious matrices, which have good thixotropic behaviour and vapour permeability, makes such composites suitable for reinforcing a large number of masonry structures, including those belonging to the historic heritage. FRCMs are bonded to the outer surfaces of structural masonry elements and, thanks to their adhesive capacity, carry much of the tensile stress that unreinforced masonry cannot withstand. The effectiveness of such reinforcements, which depends strongly on their ability to adhere to the masonry substrate, is generally assessed through dedicated experimental campaigns (shear tests). Almost all the papers in the literature devoted to bond-slip analysis refer to the case of flat bonding surfaces, although these reinforcements are also widely used on curved structural elements such as arches and vaults. Therefore, this paper reports and examines the results of an extensive experimental program concerning the behaviour of FRCM systems applied to curved masonry specimens. The results highlight the influence of both the curvature and the reinforcement position (intrados or extrados) on the response of the specimens in terms of bearing capacity, failure mode and post-peak response.
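
    As a minimal sketch of how the quantities mentioned at the end of the abstract are typically extracted from a single shear test, the snippet below processes a hypothetical load-slip record; the numbers and the chosen post-peak indicators are assumptions for the example, not the paper's data or procedure.

        import numpy as np

        # Hypothetical load-slip record from one single-lap shear test.
        slip = np.array([0.0, 0.1, 0.2, 0.4, 0.6, 0.9, 1.3, 1.8])   # [mm]
        load = np.array([0.0, 1.8, 3.1, 4.6, 5.2, 4.4, 3.0, 1.9])   # [kN]

        i_peak = int(np.argmax(load))
        peak_load = load[i_peak]          # bearing (bond) capacity of the joint
        slip_at_peak = slip[i_peak]

        # Crude post-peak indicators: residual load ratio at the end of the test
        # and the energy dissipated under the load-slip curve (trapezoidal rule).
        residual_ratio = load[-1] / peak_load
        energy = float(np.sum(0.5 * (load[1:] + load[:-1]) * np.diff(slip)))

        print(f"peak load: {peak_load:.1f} kN at a slip of {slip_at_peak:.1f} mm")
        print(f"post-peak residual ratio: {residual_ratio:.2f}")
        print(f"energy under the load-slip curve: {energy:.2f} kN*mm")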

    Data-Driven Stability Assessment of Multilayer Long Short-Term Memory Networks

    Get PDF
    Recurrent Neural Networks (RNNs) are increasingly being used for model identification, forecasting and control. When identifying physical systems whose mathematical description is unknown, Nonlinear AutoRegressive models with eXogenous inputs (NARX) or Nonlinear AutoRegressive Moving-Average models with eXogenous inputs (NARMAX) are typically used. In the context of data-driven control, machine learning algorithms have been shown to perform comparably to advanced control techniques, but they lack the guarantees of traditional stability theory. This paper illustrates a method to prove a posteriori the stability of a generic neural network, showing its application to a state-of-the-art RNN architecture. The presented method relies on identifying, from the input/output data, the poles associated with the designed network. Providing a framework that guarantees the stability of any neural network architecture, combined with generalisability and applicability to different fields, can significantly broaden the use of such networks in dynamic-systems modelling and control.
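
    The snippet below sketches an a-posteriori, data-driven stability check in this spirit: fit a linear ARX surrogate to input/output data produced by an already trained model, then inspect the poles of that surrogate. The data, model orders and stability criterion are illustrative assumptions, not the specific method of the paper; here a stable second-order linear system stands in for the network.

        import numpy as np

        rng = np.random.default_rng(0)

        # Stand-in for the trained network's input/output data: a stable
        # second-order linear system driven by white noise plays the RNN's role.
        N = 500
        u = rng.standard_normal(N)
        y = np.zeros(N)
        for k in range(2, N):
            y[k] = 1.2 * y[k - 1] - 0.45 * y[k - 2] + 0.5 * u[k - 1]

        # Fit an ARX(na=2, nb=1) surrogate, y[k] ~ a1*y[k-1] + a2*y[k-2] + b1*u[k-1],
        # by ordinary least squares on the recorded input/output sequences.
        na, nb = 2, 1
        start = max(na, nb)
        Phi = np.array([[y[k - i] for i in range(1, na + 1)] +
                        [u[k - j] for j in range(1, nb + 1)] for k in range(start, N)])
        theta, *_ = np.linalg.lstsq(Phi, y[start:], rcond=None)
        a = theta[:na]

        # The surrogate's poles are the roots of z^2 - a1*z - a2; the behaviour it
        # mimics is declared stable when every pole lies strictly inside the unit circle.
        poles = np.roots(np.r_[1.0, -a])
        print("identified poles:", poles)
        print("stable (all |poles| < 1):", bool(np.all(np.abs(poles) < 1.0)))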